
Integration of Processed Data via BATCH

This page will help you continue with your BATCH integration.

To continue with the integration via BATCH:

  1. You can choose to receive the .csv or .parquet file via AWS S3, Google Cloud Storage, Azure, or a direct link (in this last case you will need a webhook to receive the files and save them).

  2. You will need to register your credentials for the type of integration you chose. To do this, log in, get an access token, and save the credentials using our endpoints. We recommend using Swagger itself to log in and save the credentials; alternatively, you can use tools such as Postman or Insomnia, or cURL. More details are available on these pages:

     - Login and credentials using SWAGGER

     - Login and credentials using cURL

  3. Together, we need to define a schedule for sending the files, for example every 4 hours or every 12 hours.

  4. After that, you will start receiving your data in the agreed schedule, format, and storage.

  5. You can check an example CSV file: Example csv file

  6. You can check common questions about the Parquet format: How the .parquet format works
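For the direct-link option in step 1, you need a webhook on your side that receives each delivered file and stores it. A minimal sketch, assuming files arrive as plain HTTP POST bodies; the port and destination directory are arbitrary choices, not details from this page:

```python
import os
from datetime import datetime, timezone
from http.server import BaseHTTPRequestHandler, HTTPServer

DEST_DIR = "batch_files"  # hypothetical destination directory


def save_payload(data: bytes, dest_dir: str = DEST_DIR) -> str:
    """Save one received file under a timestamped name and return its path."""
    os.makedirs(dest_dir, exist_ok=True)
    name = datetime.now(timezone.utc).strftime("batch_%Y%m%dT%H%M%S%f")
    path = os.path.join(dest_dir, name)
    with open(path, "wb") as fh:
        fh.write(data)
    return path


class BatchWebhook(BaseHTTPRequestHandler):
    """Accepts POSTed batch files and writes them to disk."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        save_payload(body)
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    # Port 8000 is an arbitrary choice for this sketch.
    HTTPServer(("", 8000), BatchWebhook).serve_forever()
```

In practice you would also validate the sender (for example with a shared secret header) before saving anything.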
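Step 2 (log in, obtain an access token, save the credentials) can also be scripted instead of done through Swagger. A sketch of the general flow using only the standard library; the base URL, endpoint paths, and payload fields below are placeholders, since this page does not document the actual endpoints (see the SWAGGER and cURL pages linked above for the real ones):

```python
import json
from urllib import request

BASE_URL = "https://api.example.com"  # placeholder, not the real base URL


def login(email: str, password: str) -> str:
    """Log in and return the access token (endpoint path is hypothetical)."""
    req = request.Request(
        f"{BASE_URL}/login",
        data=json.dumps({"email": email, "password": password}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


def auth_headers(token: str) -> dict:
    """Headers for authenticated calls made with the access token."""
    return {"Content-Type": "application/json", "Authorization": f"Bearer {token}"}


def save_credentials(token: str, credentials: dict) -> None:
    """Register your storage credentials, e.g. for S3/GCS/Azure (path is hypothetical)."""
    req = request.Request(
        f"{BASE_URL}/credentials",
        data=json.dumps(credentials).encode(),
        headers=auth_headers(token),
        method="POST",
    )
    request.urlopen(req).close()
```

The bearer-token header shape is a common convention; confirm the exact authentication scheme on the linked pages.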
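Once files start arriving, reading the CSV variant needs only the standard library. A sketch with made-up column names (the real layout is in the example CSV file linked above); for .parquet you would typically use a library such as pandas or pyarrow instead:

```python
import csv
import io


def read_batch_csv(text: str) -> list[dict]:
    """Parse one delivered CSV file into a list of row dicts."""
    return list(csv.DictReader(io.StringIO(text)))


# Hypothetical two-row file; real column names come from the example CSV file.
sample = "id,event,value\n1,click,0.5\n2,view,1.0\n"
rows = read_batch_csv(sample)
# rows[0] == {"id": "1", "event": "click", "value": "0.5"}
```

Note that `csv.DictReader` returns every field as a string, so numeric columns must be converted explicitly.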